
    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expansion of the VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces, such as a binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find the appropriate selection technique. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for mobile headsets. Moreover, the front touch area can be large enough to allow a wide range of interaction types, such as multi-finger interactions. With this novel front touch interface, we pave the way to new virtual reality interaction methods.
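The abstract names Drag-n-Tap as one candidate selection technique but does not specify it. A minimal sketch of how such a two-phase selection might be modeled is shown below; the class, event names, and state machine are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a Drag-n-Tap selection technique on a front
# touchpad: dragging moves a highlight across candidate targets, and a
# subsequent tap confirms the highlighted target.
class DragNTapSelector:
    def __init__(self, targets):
        self.targets = targets      # list of selectable item names
        self.highlighted = None     # item currently under the finger
        self.selected = None        # item confirmed by the tap

    def drag_to(self, index):
        """Finger drags over target `index`, highlighting it."""
        if 0 <= index < len(self.targets):
            self.highlighted = self.targets[index]

    def tap(self):
        """A tap confirms the currently highlighted target."""
        if self.highlighted is not None:
            self.selected = self.highlighted
        return self.selected
```

Separating highlighting (drag) from confirmation (tap) is one way such a technique could avoid accidental selections when the finger first lands on the touchpad.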

    Image Management Using Pattern Recognition Systems

    With the popular usage of personal image devices and the continued increase of computing power, casual users need to handle a large number of images on computers. Image management is challenging because, in addition to searching and browsing textual metadata, we also need to address two additional challenges. First, thumbnails, which are representative forms of original images, require significant screen space to be represented meaningfully. Second, while image metadata is crucial for managing images, creating metadata for images is expensive. My research is composed of three components which address these problems. First, I explore a new way of browsing a large number of images. I redesign and implement a zoomable image browser, PhotoMesa, which is capable of showing thousands of images clustered by metadata. Combined with its simple navigation strategy, the zoomable image environment allows users to scale up the size of an image collection they can comfortably browse. Second, I examine tradeoffs of displaying thumbnails in limited screen space. While bigger thumbnails use more screen space, smaller thumbnails are hard to recognize. I introduce an automatic thumbnail cropping algorithm based on a computer vision saliency model. The cropped thumbnails keep the core informative part and remove the less informative periphery. My user study shows that users performed visual searches more than 18% faster with cropped thumbnails. Finally, I explore semi-automatic annotation techniques to help users make accurate annotations with low effort. Automatic metadata extraction is typically fast but inaccurate, while manual annotation is slow but accurate. I investigate techniques to combine these two approaches. My semi-automatic annotation prototype, SAPHARI, generates image clusters which facilitate efficient bulk annotation. For automatic clustering, I present hierarchical event clustering and clothing-based human recognition. Experimental results demonstrate the effectiveness of the semi-automatic annotation when applied to personal photo collections. Users were able to make annotations 49% and 6% faster with the semi-automatic annotation interface on event and face tasks, respectively.
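The saliency-based cropping algorithm is only summarized above. As a rough illustration, such a crop can be approximated by thresholding a saliency map and keeping the bounding box of the salient pixels; the contrast-based saliency proxy below is an assumption for the sketch, not the saliency model used in the work:

```python
import numpy as np

def crop_to_salient(image, threshold=0.5):
    """Crop a grayscale image to the bounding box of its salient region.

    Saliency is approximated here as absolute deviation from the mean
    intensity, normalized to [0, 1]; real systems use richer models.
    """
    saliency = np.abs(image - image.mean())
    if saliency.max() > 0:
        saliency = saliency / saliency.max()
    mask = saliency >= threshold
    if not mask.any():
        return image  # nothing salient: keep the full thumbnail
    rows = np.any(mask, axis=1)   # rows containing salient pixels
    cols = np.any(mask, axis=0)   # columns containing salient pixels
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return image[top:bottom + 1, left:right + 1]
```

For example, a mostly uniform thumbnail with one bright object would be cropped down to the bounding box around that object, discarding the uninformative periphery.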

    A Labeling Task Design for Supporting Algorithmic Needs: Facilitating Worker Diversity and Reducing AI Bias

    Studies on supervised machine learning (ML) recommend involving workers from various backgrounds in training dataset labeling to reduce algorithmic bias. Moreover, sophisticated tasks for categorizing objects in images are necessary to improve ML performance, further complicating micro-tasks. This study aims to develop a task design that enables the fair participation of people regardless of their specific backgrounds or the task's difficulty. By collaborating with 75 labelers from diverse backgrounds for 3 months, we analyzed workers' log data and relevant narratives to identify the task's hurdles and helpers. The findings revealed that workers' decision-making tendencies varied depending on their backgrounds. We found that a community that actively supports workers, together with machine feedback perceived by the workers, can help people engage easily in the work; ML bias could thus be mitigated. Based on these findings, we suggest an extended human-in-the-loop approach that connects labelers, machines, and communities rather than isolating individual workers. Comment: 45 pages, 4 figures.
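The extended human-in-the-loop approach is described only at a high level. One way to picture machine feedback connecting labelers and the community is a routing rule on each submitted label; the function name, the agreement-based rule, and the threshold below are illustrative assumptions, not the study's actual pipeline:

```python
# Illustrative sketch of machine feedback in a labeling loop: when a
# worker's label disagrees with a confident model prediction, the item
# is routed to community review instead of being accepted silently.
def route_label(worker_label, model_prediction, model_confidence,
                review_threshold=0.8):
    """Return 'accept' or 'review' for one labeled item."""
    if worker_label == model_prediction:
        return "accept"  # worker and model agree
    if model_confidence >= review_threshold:
        return "review"  # confident disagreement: ask the community
    return "accept"      # low-confidence model defers to the worker
```

Under a design like this, disagreements become prompts for community discussion rather than silent overrides, which is one way feedback could keep diverse workers engaged instead of isolating them.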

    Can You Ever Trust a Wiki? Impacting Perceived Trustworthiness in Wikipedia

    Wikipedia has become one of the most important information resources on the Web by promoting peer collaboration and enabling virtually anyone to edit anything. However, this mutability also leads many to distrust it as a reliable source of information. Although there have been many attempts at developing metrics to help users judge the trustworthiness of content, it is unknown how much impact such measures can have on a system that is perceived as inherently unstable. Here we examine whether a visualization that exposes hidden article information can impact readers' perceptions of trustworthiness in a wiki environment. Our results suggest that surfacing information relevant to the stability of the article and the patterns of editor behavior can have a significant impact on users' trust across a variety of page types.

    Crowdsourcing user studies with Mechanical Turk

    User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.

    OZONE: A Zoomable Interface for Navigating Ontology Information

    We present OZONE (Zoomable Ontology Navigator) for searching and browsing ontological information. OZONE visualizes query conditions and provides interactive, guided browsing for DAML (DARPA Agent Markup Language) ontologies. To visually represent objects in DAML, we define a visual model for its classes, properties, and the relationships between them. Properties can be expanded into classes for query refinement. The visual query can be formulated incrementally as users explore class and property structures interactively. Zoomable interface techniques are employed for effective navigation and usability. Keywords: Ontology, DAML, Browsing, Zoomable User Interface (ZUI), Jazz, WWW.